In response to the inability of existing 3D shape reconstruction models to effectively fuse global spatio-temporal information, a Depth Focus Volume (DFV) module was proposed to retain the transition information between focus and defocus. On this basis, a Global Spatio-Temporal Feature Coupling (GSTFC) model was proposed to extract the local and global spatio-temporal features of multi-depth-of-field image sequences. Firstly, 3D-ConvNeXt modules and 3D convolutional layers were interspersed in the contraction path to capture multi-scale local spatio-temporal features. Meanwhile, a 3D-SwinTransformer module was added to the bottleneck to capture the global correlations among the local spatio-temporal features of the multi-depth-of-field image sequences. Then, the local spatio-temporal features and global correlations were fused into global spatio-temporal features through an adaptive parameter layer and fed into the expansion path to guide the generation of the focus volume. Finally, the sequence weight information of the focus volume was extracted by DFV, and the focus-defocus transition information was retained to obtain the final depth map. Experimental results show that GSTFC reduces the Root Mean Square Error (RMSE) by 12.5% on the FoD500 dataset compared with the state-of-the-art All-in-Focus Depth Net (AiFDepthNet) model, and retains more depth-of-field transition relationships than the traditional Robust Focus Volume Regularization in Shape from Focus (RFVR-SFF) model.
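The general idea behind DFV — regressing depth from a focus volume while retaining focus/defocus transitions rather than committing to a hard argmax over the depth axis — can be illustrated by a soft-argmax. This is a minimal pure-Python sketch of that general idea, not the paper's module; the `temperature` parameter and the nested-list volume layout `focus_volume[d][y][x]` are assumptions for illustration.

```python
import math

def depth_from_focus_volume(focus_volume, depths, temperature=1.0):
    """Soft-argmax depth regression over a focus volume (illustrative sketch).

    Softmax weights over the depth axis keep a graded blend of neighbouring
    depth hypotheses, instead of discarding them with a hard argmax.
    focus_volume[d][y][x] is the focus score of depth d at pixel (y, x)."""
    h, w = len(focus_volume[0]), len(focus_volume[0][0])
    depth_map = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            scores = [fv[y][x] / temperature for fv in focus_volume]
            m = max(scores)                       # subtract max for stability
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            # expected depth under the softmax weights
            depth_map[y][x] = sum(d * e for d, e in zip(depths, exps)) / z
    return depth_map
```

With a uniform focus volume the regressed depth is the mean of the depth hypotheses; a sharply peaked volume collapses toward the peak, while intermediate peaks preserve the transition.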
Concerning the problems of non-standard wording, fuzzy semantics and feature sparsity in news topic text, a news topic text classification method based on Bidirectional Encoder Representations from Transformers (BERT) and Feature Projection network (FPnet) was proposed. The method includes two implementation modes. In mode 1, multi-layer fully connected features were extracted from the BERT output of the news topic text, and the final extracted text features were purified with the feature projection method, thereby strengthening the classification effect. In mode 2, the feature projection network was fused into the hidden layers inside the BERT model, so that the classification features were enhanced and purified through hidden-layer feature projection. Experimental results on the Toutiao, Sohu News, THUCNews-L and THUCNews-S datasets show that both modes outperform the baseline BERT method in accuracy and macro-averaged F1 value, with the highest accuracies reaching 86.96%, 86.17%, 94.40% and 93.73% respectively, which proves the feasibility and effectiveness of the proposed method.
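The purification step above — removing the component that the text feature shares with a common (class-agnostic) feature so that class-specific information dominates — can be sketched as a vector projection. This is a minimal sketch in the spirit of feature projection networks; the function names and the plain-list vector representation are illustrative assumptions, not the paper's code.

```python
def project(f, c):
    """Orthogonal projection of feature vector f onto common feature c."""
    dot_fc = sum(a * b for a, b in zip(f, c))
    dot_cc = sum(a * a for a in c)
    return [dot_fc / dot_cc * a for a in c]

def purify(f, c):
    """FPnet-style purification sketch: subtract the common-feature
    component from f, leaving the part orthogonal to c."""
    p = project(f, c)
    return [a - b for a, b in zip(f, p)]
```

After purification the result is orthogonal to the common feature, so the classifier no longer sees the shared component.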
Traditional machine learning methods fail to fully exploit the semantic and association information when classifying the sentiment polarity of online comment text. Although existing deep learning methods can extract semantic and contextual information, the extraction is often unidirectional, leaving deficiencies in obtaining the deep semantic information of comment text. Aiming at the above problems, a text sentiment analysis method was proposed that combines the generalized autoregressive pretraining model for language understanding XLNet with a Recurrent Convolutional Neural Network (RCNN). Firstly, XLNet was used to represent the text features; by introducing the segment-level recurrence mechanism and relative positional encoding, the contextual information of comment text was fully considered, thereby effectively improving the expressive ability of the text features. Then, RCNN was used to process the text features bidirectionally and extract the contextual semantic information of the text at a deeper level, thereby improving the comprehensive performance on the sentiment analysis task. Experiments with the proposed method were carried out on three public datasets: weibo-100k, waimai-10k and ChnSentiCorp. The results show that the accuracy reaches 96.4%, 91.8% and 92.9% respectively, which proves the effectiveness of the proposed method on the sentiment analysis task.
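The bidirectional processing performed by RCNN — building a left and a right context for every token, concatenating them with the token embedding, and max-pooling over positions — can be sketched in miniature. The simple averaging recurrences below stand in for RCNN's learned recurrent weights; everything here is an illustrative assumption, not the paper's model.

```python
def rcnn_encode(embeds, dim):
    """RCNN-style encoding sketch: each token is represented by
    [left context; embedding; right context], then element-wise
    max pooling over token positions yields the text feature."""
    n = len(embeds)
    left = [[0.0] * dim for _ in range(n)]
    right = [[0.0] * dim for _ in range(n)]
    # left-to-right and right-to-left context recurrences
    # (simple averaging stands in for learned recurrent weights)
    for i in range(1, n):
        left[i] = [0.5 * (l + e) for l, e in zip(left[i - 1], embeds[i - 1])]
    for i in range(n - 2, -1, -1):
        right[i] = [0.5 * (r + e) for r, e in zip(right[i + 1], embeds[i + 1])]
    reps = [left[i] + embeds[i] + right[i] for i in range(n)]
    # max pooling over token positions
    return [max(col) for col in zip(*reps)]
```

The pooled vector has dimension `3 * dim` and captures information from both directions of the sequence, which is the property the method relies on.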
In order to solve the problems of the feature selection algorithm ReliefF, such as poor stability and low classification accuracy of the selected feature subsets caused by using Euclidean distance to select nearest-neighbor samples, an MICReliefF (Maximum Information Coefficient-ReliefF) algorithm based on the Maximum Information Coefficient (MIC) was proposed. At the same time, the classification accuracy of a Support Vector Machine (SVM) model was used as the evaluation index, and the optimal feature subset was determined automatically through multiple optimizations, thereby realizing the interactive optimization of the MICReliefF algorithm and the classification model, that is, the MICReliefF-SVM automatic feature selection algorithm. The performance of the MICReliefF-SVM algorithm was verified on several UCI public datasets. Experimental results show that the MICReliefF-SVM automatic feature selection algorithm can not only filter out more redundant features, but also select feature subsets with good stability and generalization ability. Compared with Random Forest (RF), max-Relevance and Min-Redundancy (mRMR), Correlation-based Feature Selection (CFS) and other classical feature selection algorithms, the MICReliefF algorithm achieves higher classification accuracy.
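The core modification — selecting ReliefF's nearest hit and nearest miss by a dependence measure instead of Euclidean distance — can be sketched as follows. Absolute Pearson correlation stands in here for MIC (whose full grid-based estimator is considerably more involved), and the interactive SVM optimization loop is omitted; all names and the weight-update details are illustrative assumptions.

```python
import math
import random

def sim(x, y):
    """Stand-in similarity for MIC: absolute Pearson correlation
    between two samples (the real algorithm uses MIC here)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def mic_relieff(X, y, n_iter=50, seed=0):
    """ReliefF-style feature weighting where the nearest hit/miss are
    chosen by maximum similarity rather than minimum Euclidean distance."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    ranges = [max(col) - min(col) or 1.0 for col in zip(*X)]  # for scaling diffs
    w = [0.0] * d
    for _ in range(n_iter):
        i = rng.randrange(n)
        xi, yi = X[i], y[i]
        # nearest hit (same class) and miss (other class) by similarity
        hit = max((j for j in range(n) if j != i and y[j] == yi),
                  key=lambda j: sim(xi, X[j]))
        miss = max((j for j in range(n) if y[j] != yi),
                   key=lambda j: sim(xi, X[j]))
        # reward features that separate classes, penalize ones that don't
        for f in range(d):
            w[f] += (abs(xi[f] - X[miss][f]) - abs(xi[f] - X[hit][f])) / (ranges[f] * n_iter)
    return w
```

On a toy dataset where only the first feature separates the classes, that feature accumulates a large positive weight while an uninformative constant feature stays at zero.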
Concerning the fact that existing Quality of Service (QoS) evaluation methods ignore implicit service quality assessment, which leads to inaccurate results, a service evaluation method that comprehensively considers explicit and implicit quality attributes was put forward. Explicit quality attributes were expressed in vector form and, after quantization and normalization with a service quality assessment model, their evaluation values were calculated; implicit quality attributes were evaluated according to the recommendations of similar users, taking user credibility and the difference between old and new users into account in the evaluation process. Finally, the explicit and implicit quality evaluations were combined as the QoS evaluation result. Comparison experiments against three algorithms were performed on one million Web Service QoS records. The simulation results show that the proposed method has certain feasibility and accuracy.
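The final combination of the two evaluation channels can be sketched as a weighted aggregation: a weighted mean of normalized explicit attributes plus a credibility-weighted mean of similar users' ratings. The linear mixing weight `alpha`, the credibility-weighted mean, and all names are illustrative assumptions, since the abstract does not give the exact formulas.

```python
def qos_score(explicit, weights, implicit_ratings, credibility, alpha=0.7):
    """Combine explicit and implicit QoS evaluations (illustrative sketch).

    explicit         -- attribute values already quantized/normalized to [0, 1]
    weights          -- importance weight of each explicit attribute
    implicit_ratings -- ratings recommended by similar users
    credibility      -- credibility of each recommending user
    alpha            -- assumed mixing weight between the two channels"""
    # explicit channel: weighted mean of normalized attribute values
    e = sum(w * v for w, v in zip(weights, explicit)) / sum(weights)
    # implicit channel: credibility-weighted mean of similar users' ratings
    num = sum(c * r for c, r in zip(credibility, implicit_ratings))
    den = sum(credibility)
    i = num / den if den else 0.0
    return alpha * e + (1 - alpha) * i
```

A user with higher credibility thus pulls the implicit evaluation toward their rating, while `alpha` controls how much the explicit measurements dominate.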
In order to avoid transmission collisions and improve energy efficiency in periodic-report Wireless Sensor Networks (WSNs), a Medium Access Control (MAC) protocol with network utility maximization and collision avoidance, called UM-MAC, was proposed. UM-MAC used a Time Division Multiple Access (TDMA) scheduling mechanism and introduced a utility model into the slot assignment process. Based on this model, a utility maximization problem jointly optimizing link reliability and energy consumption was formulated. To solve it, a heuristic algorithm was proposed that enables the network to quickly find a slot scheduling strategy which maximizes network utility and avoids transmission collisions. Comparison experiments among the UM-MAC, S-MAC and CA (Collision Avoidance)-MAC protocols were conducted on networks with different numbers of nodes: UM-MAC achieved larger network utility and a higher average packet delivery ratio; its lifetime was between those of S-MAC and CA-MAC, while its average transmission delay increased on networks with different loads. The simulation results show that UM-MAC can achieve collision avoidance and improve network performance in terms of packet delivery ratio and energy efficiency; meanwhile, the TDMA-based protocol is not better than the contention-based protocols in low-load networks.
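The heuristic step — finding a collision-free slot assignment of high utility — can be sketched as a greedy TDMA scheduler. This is a minimal illustrative sketch, not the UM-MAC algorithm itself; the greedy ordering, the conflict rule (links sharing a node may not share a slot), and the `utility` callback are all assumptions.

```python
def schedule_slots(links, n_slots, utility):
    """Greedy TDMA slot assignment sketch.

    links   -- list of (tx, rx) node-id pairs
    n_slots -- number of slots in a frame
    utility -- callable (link, slot) -> float, the utility model

    Links sharing a node never share a slot, which avoids collisions."""
    assignment = {}
    used = {s: set() for s in range(n_slots)}   # nodes busy in each slot
    # greedily consider links in order of their best achievable utility
    for link in sorted(links,
                       key=lambda l: -max(utility(l, s) for s in range(n_slots))):
        tx, rx = link
        free = [s for s in range(n_slots) if tx not in used[s] and rx not in used[s]]
        if not free:
            continue  # link stays unscheduled in this frame
        best = max(free, key=lambda s: utility(link, s))
        assignment[link] = best
        used[best].update(link)
    return assignment
```

With a utility that prefers earlier slots, two non-conflicting links share slot 0 while a link that touches both of them is pushed to slot 1, so no node transmits and receives in the same slot.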